134 research outputs found
ALVIC versus the Internet: Redesigning a Networked Virtual Environment Architecture
The explosive growth in the number of applications based on networked virtual environment technology, both games and virtual communities, shows that these types of applications have become commonplace in a short period of time. From a research point of view, however, the inherent weaknesses in their architectures are quickly exposed. The Architecture for Large-Scale Virtual Interactive Communities (ALVIC) was originally developed to serve as a generic framework for deploying networked virtual environment applications on the Internet. While it has been shown to scale effectively to the numbers originally put forward, our findings show that, on a real-life network such as the Internet, several drawbacks will not be overcome in the near future. We have therefore recently started developing ALVIC-NG, which, while incorporating the findings from our previous research, makes several improvements on the original version, making it suitable for deployment on the Internet as it exists today.
The diminishing role of hubs in dynamical processes on complex networks
It is notoriously difficult to predict the behaviour of a complex self-organizing system, where the interactions among dynamical units form a heterogeneous topology. Even if the dynamics of each microscopic unit is known, a real understanding of their contributions to the macroscopic system behaviour is still lacking. Here we develop information-theoretical methods to distinguish the contribution of each individual unit to the collective out-of-equilibrium dynamics. We show that for a system of units connected by a network of interaction potentials with an arbitrary degree distribution, highly connected units have less impact on the system dynamics than intermediately connected units. In an equilibrium setting, the hubs are often found to dictate the long-term behaviour. However, we find both analytically and experimentally that the instantaneous states of these units have a short-lasting effect on the state trajectory of the entire system. We present qualitative evidence of this phenomenon from empirical findings about a social network of product recommendations, a protein-protein interaction network, and a neural network, suggesting that it might indeed be a widespread property in nature.
Bayesian population receptive field modelling
We introduce a probabilistic (Bayesian) framework and associated software toolbox for mapping population receptive fields (pRFs) based on fMRI data. This generic approach is intended to work with stimuli of any dimension and is demonstrated and validated in the context of 2D retinotopic mapping. The framework enables the experimenter to specify generative (encoding) models of fMRI timeseries, in which experimental manipulations enter a pRF model of neural activity, which in turn drives a nonlinear model of neurovascular coupling and Blood Oxygenation Level Dependent (BOLD) response. The neuronal and haemodynamic parameters are estimated together on a voxel-by-voxel or region-of-interest basis using a Bayesian estimation algorithm (variational Laplace). This offers several novel contributions to receptive field modelling. The variance/covariance of parameters is estimated, enabling receptive fields to be plotted while properly representing uncertainty about pRF size and location. Variability in the haemodynamic response across the brain is accounted for. Furthermore, the framework introduces formal hypothesis testing to pRF analysis, enabling competing models to be evaluated based on their model evidence (approximated by the variational free energy), which represents the optimal tradeoff between accuracy and complexity. Using simulations and empirical data, we found that parameters typically used to represent pRF size and neuronal scaling are strongly correlated, which should be taken into account when making inferences. We used the framework to compare the evidence for six variants of pRF model using 7T functional MRI data and found a circular Difference of Gaussians (DoG) model to be the best explanation for our data overall. We hope this framework will prove useful for mapping stimulus spaces with any number of dimensions onto the anatomy of the brain. Code available at https://github.com/pzeidman/BayespR
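The core encoding idea behind pRF mapping can be sketched independently of the toolbox: a 2D Gaussian receptive field whose overlap with the stimulus aperture, scaled by a neuronal gain, gives the predicted neural response. This is a minimal illustration, not the toolbox's API; all function names, grid sizes, and parameter values are assumptions for the example.

```python
import numpy as np

def gaussian_prf(x0, y0, sigma, xx, yy):
    """2D Gaussian population receptive field evaluated on a grid (xx, yy)."""
    return np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))

def predict_response(stimulus, prf, beta=1.0):
    """Predicted neural response: gain-scaled overlap of stimulus and pRF.

    stimulus: (T, H, W) binary aperture over time.
    prf:      (H, W) receptive-field weights.
    """
    return beta * np.tensordot(stimulus, prf, axes=([1, 2], [0, 1]))

# Example: a 21x21 visual-field grid and a vertical bar sweeping left to right.
grid = np.linspace(-10, 10, 21)
xx, yy = np.meshgrid(grid, grid)
prf = gaussian_prf(x0=2.0, y0=0.0, sigma=3.0, xx=xx, yy=yy)

stimulus = np.zeros((21, 21, 21))
for t in range(21):
    stimulus[t, :, t] = 1.0          # bar occupies column t at time t

response = predict_response(stimulus, prf)
print(response.argmax())             # peaks when the bar crosses x0 = 2, i.e. column 12
```

In the full framework this neural prediction would additionally pass through a haemodynamic (BOLD) model before being compared to data; here only the pRF stage is shown.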
Inferring causation from time series in Earth system sciences
The heart of the scientific enterprise is a rational effort to understand the causes behind the phenomena we observe. In large-scale complex dynamical systems such as the Earth system, real experiments are rarely feasible. However, a rapidly increasing amount of observational and simulated data opens up the use of novel data-driven causal methods beyond the commonly adopted correlation techniques. Here, we give an overview of causal inference frameworks and identify promising generic application cases common in Earth system sciences and beyond. We discuss challenges and initiate the benchmark platform causeme.net to close the gap between method users and developers.
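One of the simplest data-driven causal frameworks such overviews cover is Granger causality: a lagged driver X is said to Granger-cause Y if adding past X to an autoregressive model of Y reduces the prediction error. A minimal bivariate sketch (toy data and the `granger_gain` helper are illustrative, not from the paper):

```python
import numpy as np

def granger_gain(x, y, lag=1):
    """Fractional reduction in residual variance when lagged x is added to an
    autoregressive model of y -- a minimal bivariate Granger-style statistic."""
    T = len(y)
    Y = y[lag:]
    # Restricted model: y_t ~ intercept + y_{t-1}
    A = np.column_stack([np.ones(T - lag), y[:-lag]])
    r_res = Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]
    # Full model: y_t ~ intercept + y_{t-1} + x_{t-1}
    B = np.column_stack([A, x[:-lag]])
    r_full = Y - B @ np.linalg.lstsq(B, Y, rcond=None)[0]
    return 1.0 - np.var(r_full) / np.var(r_res)

# Synthetic example: x drives y with a one-step delay, but not vice versa.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

print(granger_gain(x, y))   # large: past x strongly improves prediction of y
print(granger_gain(y, x))   # near zero: past y adds nothing for predicting x
```

Real Earth-system applications face the complications the paper discusses (autocorrelation, hidden confounders, many variables), which is exactly why benchmark platforms such as causeme.net matter.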
Consensus guidelines for the use and interpretation of angiogenesis assays
The formation of new blood vessels, or angiogenesis, is a complex process that plays important roles in growth and development, tissue and organ regeneration, as well as numerous pathological conditions. Angiogenesis undergoes multiple discrete steps that can be individually evaluated and quantified by a large number of bioassays. These independent assessments hold advantages but also have limitations. This article describes in vivo, ex vivo, and in vitro bioassays that are available for the evaluation of angiogenesis and highlights critical aspects that are relevant for their execution and proper interpretation. As such, this collaborative work is the first edition of consensus guidelines on angiogenesis bioassays to serve for current and future reference
Performance Evaluation of Client-Side Video Stream Quality Selection using Autonomous Avatars
In this paper we present test results for our framework for networked virtual environment (NVE) applications that incorporates real-time video communication between avatars. The primary goal of the architecture is to provide efficient scalability for large-scale networked virtual environments. To realize this, our solution maximizes client responsibilities and relies on direct client-to-client communication streams. By employing multiple multicast groups to channel the video streams, we achieve bandwidth adaptation on the client side with minimal server intervention. This results in a reduced server load and at the same time guarantees a highly scalable end result, depending only on the available processing power of the individual connected clients.
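The client-side adaptation described above can be sketched as a greedy selection: each remote avatar's video is published on several multicast groups (one per quality layer), and the client subscribes to the best layer that fits its remaining downstream budget. Layer names, bitrates, and the priority ordering below are illustrative assumptions, not values from the paper.

```python
# Hypothetical quality layers, one multicast group each: (name, kbit/s), best first.
QUALITY_LAYERS = [
    ("high", 1200),
    ("medium", 600),
    ("low", 200),
]

def select_layers(avatars_by_priority, budget_kbps):
    """Greedily assign the best affordable quality layer to each avatar.

    avatars_by_priority: avatars ordered by importance (e.g. virtual distance).
    budget_kbps: the client's available downstream bandwidth.
    """
    chosen = {}
    for avatar in avatars_by_priority:
        for name, rate in QUALITY_LAYERS:
            if rate <= budget_kbps:
                chosen[avatar] = name        # join this layer's multicast group
                budget_kbps -= rate
                break
        else:
            chosen[avatar] = None            # no capacity left: stay unsubscribed
    return chosen

print(select_layers(["near", "mid", "far"], budget_kbps=1500))
# {'near': 'high', 'mid': 'low', 'far': None}
```

Because the server only forwards multicast group membership, this kind of decision stays entirely on the client, matching the reduced-server-load goal stated in the abstract.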
Information processing as a paradigm to model and simulate complex systems
Abstract is not available in full text.
Improving CEMA using Correlation Optimization
Sensitive cryptographic information, e.g. AES secret keys, can be extracted from the electromagnetic (EM) leakages unintentionally emitted by a device using techniques such as Correlation Electromagnetic Analysis (CEMA). In this paper, we introduce Correlation Optimization (CO), a novel approach that improves CEMA attacks by formulating the selection of useful EM leakage samples in a trace as a machine learning optimization problem. To this end, we propose the correlation loss function, which aims to maximize the Pearson correlation between a set of EM traces and the true AES key during training. We show that CO works with high-dimensional and noisy traces, regardless of time-domain trace alignment and without requiring prior knowledge of the power consumption characteristics of the cryptographic hardware. We evaluate our approach using the ASCAD benchmark dataset and a custom dataset of EM leakages from an Arduino Duemilanove, captured with a USRP B200 SDR. Our results indicate that the masked AES implementation used in all three ASCAD datasets can be broken with a shallow Multilayer Perceptron model, whilst requiring only 1,000 test traces on average. A similar methodology was employed to break the unprotected AES implementation from our custom dataset, using 22,000 unaligned and unfiltered test traces.